
    Parameterizing by the Number of Numbers

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility (ILPF) to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result concerns the following problem: given a non-deterministic Mealy machine M (a finite-state automaton that outputs a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y meeting the requirement c. We show that this problem is hard for W[1]. If the question is instead whether there exists an input word x such that some computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
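
    As a concrete illustration of the number-of-numbers viewpoint (a sketch of the problem formulation only, not the paper's ILPF-based FPT algorithm), consider Subset Sum: once the input multiset is compressed into k distinct values with multiplicities, the question is whether per-value counts within those multiplicities can hit the target, so the number of unknowns depends only on k. A minimal brute-force sketch:

```python
from collections import Counter
from itertools import product

def subset_sum_by_distinct_values(multiset, target):
    """Decide Subset Sum with the input compressed into (value, multiplicity)
    pairs, i.e. one bounded counter per *distinct* value -- the
    "number of numbers" view.  Brute force, for illustration only; it is not
    the FPT algorithm of the paper."""
    pairs = sorted(Counter(multiset).items())          # (value, multiplicity)
    ranges = [range(m + 1) for _, m in pairs]          # copies taken per value
    for counts in product(*ranges):
        if sum(c * v for c, (v, _) in zip(counts, pairs)) == target:
            return True
    return False

# {3, 3, 3, 5, 5, 9}: 9 + 5 = 14 is reachable, 4 is not.
print(subset_sum_by_distinct_values([3, 3, 3, 5, 5, 9], 14))   # True
print(subset_sum_by_distinct_values([3, 3, 3, 5, 5, 9], 4))    # False
```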

    The Computational Complexity of the Game of Set and its Theoretical Applications

    The game of SET is a popular card game in which the objective is to form Sets using cards from a special deck. In this paper we study single- and multi-round variations of this game from the computational complexity point of view and establish interesting connections with other classical computational problems. Specifically, we first show that a natural generalization of the problem of finding a single Set, parameterized by the size of the sought Set, is W-hard; our reduction also applies to a natural parameterization of Perfect Multi-Dimensional Matching, a result which may be of independent interest. Second, we observe that a version of the game where one seeks to find the largest possible number of disjoint Sets from a given set of cards is a special case of 3-Set Packing; we establish that this restriction remains NP-complete. Similarly, the version where one seeks to find the smallest number of disjoint Sets that overlap all possible Sets is shown to be NP-complete, through a close connection to the Independent Edge Dominating Set problem. Finally, we study a 2-player version of the game, for which we show a close connection to Arc Kayles, as well as fixed-parameter tractability when parameterized by the number of rounds played.
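
    For concreteness (an illustrative sketch of the underlying card game, not code from the paper): three cards form a Set exactly when, in every attribute, the three values are either all equal or all distinct, and a single Set can be found by brute force over triples.

```python
from itertools import combinations

def is_set(cards):
    """Classic SET rule: three cards (tuples of attribute values) form a Set
    iff every attribute is all-equal or all-distinct across the three cards."""
    a, b, c = cards
    return all(len({x, y, z}) in (1, 3) for x, y, z in zip(a, b, c))

def find_a_set(deck):
    """Brute-force search over all triples of cards for one Set."""
    for triple in combinations(deck, 3):
        if is_set(triple):
            return triple
    return None

# Cards with 4 attributes, each attribute taking a value in {0, 1, 2}.
deck = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2), (0, 1, 2, 0)]
print(find_a_set(deck))   # ((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2))
```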

    Monomial Testing and Applications

    In this paper, we devise two algorithms for the problem of testing q-monomials of degree k in any multivariate polynomial represented by a circuit, regardless of the primality of q. One is an O^*(2^k) time randomized algorithm. The other is an O^*(12.8^k) time deterministic algorithm for the same q-monomial testing problem, but requiring the polynomials to be represented by tree-like circuits. Several applications of q-monomial testing are also given, including a deterministic O^*(12.8^{mk}) upper bound for the m-set k-packing problem.
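
    To pin down the kind of object being tested (a brute-force sketch of the closely related multilinear-monomial detection problem, essentially the q = 2 case; it bears no resemblance to the O^*(2^k) algebraic algorithm), one can expand a small polynomial explicitly and look for a degree-k monomial in which every variable has exponent at most one:

```python
def poly_mul(p, q):
    """Multiply polynomials stored as {exponent_tuple: coefficient} dicts."""
    r = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(a + b for a, b in zip(ea, eb))
            r[e] = r.get(e, 0) + ca * cb
    return {e: c for e, c in r.items() if c != 0}

def poly_add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def has_multilinear_monomial(p, k):
    """True if the expanded polynomial has a degree-k monomial with every
    variable exponent at most 1 (and a non-zero coefficient)."""
    return any(max(e) <= 1 and sum(e) == k for e in p)

n = 3                       # variables x0, x1, x2
def var(i):
    e = [0] * n
    e[i] = 1
    return {tuple(e): 1}

# (x0 + x1)(x1 + x2) = x0*x1 + x0*x2 + x1^2 + x1*x2
p = poly_mul(poly_add(var(0), var(1)), poly_add(var(1), var(2)))
print(has_multilinear_monomial(p, 2))   # True, e.g. x0*x1
```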

    Streaming Kernelization

    Kernelization is a formalization of preprocessing for combinatorially hard problems. We modify the standard definition of kernelization, which allows any polynomial-time algorithm for the preprocessing, by requiring instead that the preprocessing runs in a streaming setting and uses O(poly(k) log|x|) bits of memory on instances (x,k). We obtain several results in this new setting, depending on the number of passes over the input that such a streaming kernelization is allowed to make. Edge Dominating Set turns out to be an interesting example because it has no single-pass kernelization, but two passes over the input suffice to match the bounds of the best standard kernelization.
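
    To give a feel for the memory regime (a minimal illustration under our own simplifications, not one of the paper's kernelizations): a single pass that greedily grows a matching and stops once it exceeds k edges stores only O(k) vertex names, i.e. O(k log|x|) bits, and already refutes Vertex Cover instances, since k+1 vertex-disjoint edges force any vertex cover to have more than k vertices.

```python
def one_pass_vertex_cover_refuter(edge_stream, k):
    """One pass over an edge stream, storing O(k) vertex identifiers.

    Greedily grows a matching; once it exceeds k edges, every vertex cover of
    the streamed graph has more than k vertices, so the instance is rejected.
    Returning True only means "not refuted" -- this is a building block in the
    spirit of streaming preprocessing, not a complete kernelization."""
    matched = set()          # endpoints of the matching built so far
    matching_size = 0
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching_size += 1
            if matching_size > k:
                return False         # minimum vertex cover exceeds k
    return True

edges = [(1, 2), (3, 4), (5, 6), (2, 3)]
print(one_pass_vertex_cover_refuter(iter(edges), 2))   # False: 3 disjoint edges
print(one_pass_vertex_cover_refuter(iter(edges), 3))   # True: not refuted
```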

    Open Problems in Parameterized and Exact Computation - IWPEC 2006

    In September 2006, the Second International Workshop on Parameterized and Exact Computation (IWPEC 2006) was held in Zürich, Switzerland, as part of ALGO 2006. At the end of IWPEC 2006, a problem session was held. Most of the problems mentioned at that session, together with some further problems contributed by the participants of IWPEC 2006, are listed here.

    Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter

    An important result in the study of polynomial-time preprocessing shows that there is an algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G',k') in polynomial time with the guarantee that G' has at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the terminology of parameterized complexity we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Theta(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated to the vertex covering number vc(G), since fvs(G) <= vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G',X',k') such that |V(G')| <= 2k and |V(G')| <= O(|X'|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by the cardinality of a given vertex cover of the graph unless NP is in coNP/poly and the polynomial hierarchy collapses to the third level.
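
    For contrast with the 2k-vertex kernel discussed above (which relies on deeper tools), the sketch below shows the much simpler classical high-degree kernelization for k-Vertex Cover, included only to illustrate what a kernelization computes: a vertex of degree greater than k must be in every solution of size at most k, and once the maximum degree is at most k, more than k*k remaining edges rule out a solution.

```python
def high_degree_kernel(edges, k):
    """Classical high-degree ("Buss") kernelization for k-Vertex Cover.
    A textbook illustration, not the 2k-vertex kernel from the paper.

    Returns (reduced_edges, reduced_k), or None if the answer is NO."""
    edges = {frozenset(e) for e in edges}
    while True:
        if k < 0:
            return None
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        # A vertex of degree > k is in every vertex cover of size <= k:
        # take it, delete its edges, and decrease the budget.
        edges = {e for e in edges if high not in e}
        k -= 1
    if len(edges) > k * k:
        return None          # max degree <= k: k vertices cover <= k*k edges
    return edges, k          # at most k*k edges (hence at most 2k*k vertices)

# A star with 5 leaves plus a disjoint triangle, budget k = 3:
E = [(0, i) for i in range(1, 6)] + [(10, 11), (11, 12), (12, 10)]
print(high_degree_kernel(E, 3))   # the star collapses, the triangle remains, k' = 2
```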

    Expanding the expressive power of Monadic Second-Order logic on restricted graph classes

    We combine integer linear programming and recent advances in Monadic Second-Order model checking to obtain two new algorithmic meta-theorems for graphs of bounded vertex-cover. The first shows that cardMSO1, an extension of the well-known Monadic Second-Order logic by the addition of cardinality constraints, can be solved in FPT time parameterized by vertex cover. The second meta-theorem shows that the MSO partitioning problems introduced by Rao can also be solved in FPT time with the same parameter. The significance of our contribution stems from the fact that these formalisms can describe problems which are W[1]-hard and even NP-hard on graphs of bounded tree-width. Additionally, our algorithms have only an elementary dependence on the parameter and formula. We also show that both results are easily extended from vertex cover to neighborhood diversity.
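
    The structural fact such meta-theorems build on (recalled here as background, not as the paper's algorithm): if X is a vertex cover, then the vertices outside X form an independent set, and each of them is fully described by its neighbourhood inside X, so they fall into at most 2^|X| classes; this is also closely related to neighborhood diversity. A small sketch that computes these classes:

```python
from collections import defaultdict

def neighborhood_classes(adj, cover):
    """Group the vertices outside a given vertex cover by their neighbourhood
    inside the cover.  These at most 2^|cover| classes, plus the cover itself,
    determine the graph -- the structure exploited by algorithms parameterized
    by vertex cover."""
    cover = set(cover)
    classes = defaultdict(list)
    for v in adj:
        if v not in cover:
            classes[frozenset(adj[v] & cover)].append(v)
    return dict(classes)

# A star with centre 0 and leaves 1, 2, 3; X = {0} is a vertex cover.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(neighborhood_classes(adj, {0}))   # {frozenset({0}): [1, 2, 3]}
```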

    Parameterized Algorithms for Modular-Width

    It is known that a number of natural graph problems which are FPT parameterized by treewidth become W-hard when parameterized by clique-width. It is therefore desirable to find a different structural graph parameter which is as general as possible, covers dense graphs but does not incur such a heavy algorithmic penalty. The main contribution of this paper is to consider a parameter called modular-width, defined using the well-known notion of modular decompositions. Using a combination of ILPs and dynamic programming we manage to design FPT algorithms for Coloring and Partitioning into paths (and hence Hamiltonian path and Hamiltonian cycle), which are W-hard for both clique-width and its recently introduced restriction, shrub-depth. We thus argue that modular-width occupies a sweet spot as a graph parameter, generalizing several simpler notions on dense graphs but still evading the "price of generality" paid by clique-width.
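
    As background for the definition used above (a sketch of the standard notion behind modular decomposition, not code from the paper): a set M of vertices is a module if every vertex outside M is adjacent either to all of M or to none of M; modular-width measures, roughly, the largest number of parts that have to be handled at once in the modular decomposition tree.

```python
def is_module(adj, M):
    """Check whether M is a module of the graph given by adjacency sets:
    every vertex outside M sees either all of M or none of M."""
    M = set(M)
    for v in adj:
        if v in M:
            continue
        seen = adj[v] & M
        if seen and seen != M:
            return False
    return True

# Path 0-1-2-3: {1, 2} is not a module (vertex 0 sees 1 but not 2), {0} is.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_module(adj, {1, 2}))   # False
print(is_module(adj, {0}))      # True
```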

    Claw-free t-perfect graphs can be recognised in polynomial time

    A graph is called t-perfect if its stable set polytope is defined by non-negativity, edge, and odd-cycle inequalities. We show that it can be decided in polynomial time whether a given claw-free graph is t-perfect.
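
    For readers unfamiliar with the term (an illustrative sketch only; the paper's polynomial-time algorithm decides t-perfection, not claw-freeness, which is easy to test): a claw is an induced K_{1,3}, i.e. a centre adjacent to three pairwise non-adjacent vertices.

```python
from itertools import combinations

def is_claw_free(adj):
    """Return True if the graph (adjacency sets) has no induced K_{1,3},
    i.e. no vertex with three pairwise non-adjacent neighbours."""
    for centre, nbrs in adj.items():
        for a, b, c in combinations(sorted(nbrs), 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return False
    return True

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}                 # K_{1,3} itself
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}    # C_5
print(is_claw_free(star))    # False
print(is_claw_free(cycle5))  # True
```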

    Parameterized Approximation Schemes using Graph Widths

    Combining the techniques of approximation algorithms and parameterized complexity has long been considered a promising research area, but relatively few results are currently known. In this paper we study the parameterized approximability of a number of problems which are known to be hard to solve exactly when parameterized by treewidth or clique-width. Our main contribution is to present a natural randomized rounding technique that extends well-known ideas and can be used for both of these widths. Applying this very generic technique we obtain approximation schemes for a number of problems, evading both polynomial-time inapproximability and parameterized intractability bounds.
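
    As a reminder of what randomized rounding means in its textbook form (recalled only to fix terminology; the paper's width-based rounding schemes are different, and the sketch below additionally assumes SciPy is available): solve a fractional relaxation, then keep each object independently with a probability given by its fractional value, patching up deterministically if needed.

```python
import math
import random
from scipy.optimize import linprog

def set_cover_randomized_rounding(universe, sets, seed=0):
    """Textbook LP-based randomized rounding for Set Cover (illustration only).

    Solve the fractional relaxation, run O(log n) rounds in which each set is
    kept with probability equal to its LP value, then greedily patch whatever
    is still uncovered so the output is always a cover."""
    random.seed(seed)
    elems = sorted(universe)
    n, m = len(elems), len(sets)
    # One covering constraint per element e:  sum over sets containing e >= 1.
    A_ub = [[-1.0 if e in s else 0.0 for s in sets] for e in elems]
    b_ub = [-1.0] * n
    frac = linprog(c=[1.0] * m, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * m).x
    chosen, covered = set(), set()
    for _ in range(2 * (1 + math.ceil(math.log(n + 1)))):
        for j in range(m):
            if random.random() < frac[j]:
                chosen.add(j)
                covered |= sets[j]
    for e in elems:                          # deterministic patch-up
        if e not in covered:
            j = next(i for i, s in enumerate(sets) if e in s)
            chosen.add(j)
            covered |= sets[j]
    return chosen

universe = {1, 2, 3, 4, 5}
sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(sorted(set_cover_randomized_rounding(universe, sets)))   # [0, 3]
```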